10 research outputs found

    Single-Trial Recognition of Video Gamer’s Expertise from Brain Haemodynamic and Facial Emotion Responses

    With an increase in consumer demand for video gaming entertainment, the games industry is exploring novel ways of game interaction, such as providing direct interfaces between the game and the gamers' cognitive or affective responses. In this work, gamers' brain activity has been imaged using functional near infrared spectroscopy (fNIRS) whilst they watch videos of a video game (League of Legends) that they play. A video of each participant's face is also recorded for each of a total of 15 trials, where a trial is defined as watching one gameplay video. From the data collected, i.e., gamers' fNIRS data combined with emotional state estimates derived from their facial expressions, the expertise level of the gamers has been decoded per trial in a multi-modal framework comprising unsupervised deep feature learning and classification by state-of-the-art models. The best tri-class classification accuracy, 91.44%, is obtained using a cascade of the random convolutional kernel transform (ROCKET) feature extraction method and a deep classifier. This is the first work that aims at decoding the expertise level of gamers using non-restrictive and portable technologies for brain imaging and emotional state recognition derived from gamers' facial expressions. This work has profound implications for novel designs of future human interactions with video games and brain-controlled games.
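    The following is a minimal, illustrative sketch (in Python) of a ROCKET-style pipeline for trial-wise classification: random convolutional kernels produce PPV and max-pooled features that are fed to a linear classifier. The toy data, kernel settings and the RidgeClassifierCV back end are assumptions for illustration; the paper's actual framework also includes unsupervised deep feature learning, facial-emotion features and a deep classifier.

```python
# Minimal sketch of a ROCKET-style pipeline for trial-wise expertise
# classification. Illustrative only: data, kernel settings and the ridge
# classifier are assumptions, not the authors' exact configuration.
import numpy as np
from sklearn.linear_model import RidgeClassifierCV
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def random_kernels(n_kernels, input_len):
    """Draw random convolutional kernels (length, weights, bias, dilation)."""
    kernels = []
    for _ in range(n_kernels):
        length = rng.choice([7, 9, 11])
        weights = rng.normal(size=length)
        weights -= weights.mean()
        bias = rng.uniform(-1.0, 1.0)
        max_exp = np.log2((input_len - 1) / (length - 1))
        dilation = int(2 ** rng.uniform(0, max_exp))
        kernels.append((weights, bias, dilation))
    return kernels

def transform(x, kernels):
    """Map one univariate series to [PPV, max] features per kernel."""
    feats = []
    for weights, bias, dilation in kernels:
        idx = np.arange(len(weights)) * dilation
        valid = len(x) - idx[-1]
        conv = np.array([x[i + idx] @ weights + bias for i in range(valid)])
        feats.extend([(conv > 0).mean(), conv.max()])   # PPV and max pooling
    return np.array(feats)

# Toy stand-in data: 45 trials (15 per expertise class) of one fNIRS channel,
# 200 samples each; real data would concatenate all channels / modalities.
X_raw = rng.normal(size=(45, 200))
y = np.repeat([0, 1, 2], 15)            # novice / intermediate / expert labels

kernels = random_kernels(200, X_raw.shape[1])
X_feat = np.stack([transform(x, kernels) for x in X_raw])

clf = RidgeClassifierCV(alphas=np.logspace(-3, 3, 10))
print(cross_val_score(clf, X_feat, y, cv=5).mean())
```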

    A Temporal Type-2 Fuzzy System for Time-dependent Explainable Artificial Intelligence

    Explainable Artificial Intelligence (XAI) is a paradigm that delivers transparent models and decisions, which are easy to understand, analyze, and augment by a non-technical audience. Fuzzy Logic Systems (FLS) based XAI can provide an explainable framework while also modeling the uncertainties present in real-world environments, which renders it suitable for applications where explainability is a requirement. However, most real-life processes are not characterized by high levels of uncertainty alone; they are inherently time-dependent as well, i.e., the processes change with time. To account for the temporal component associated with a process, in this work we present a novel Temporal Type-2 FLS based approach for time-dependent XAI (TXAI) systems, which can account for the likelihood of a measurement's occurrence in the time domain using the measurement's frequency of occurrence. In Temporal Type-2 Fuzzy Sets (TT2FSs), a four-dimensional (4D) time-dependent membership function is developed, where relations are used to construct the inter-relations between the elements of the universe of discourse and their frequency of occurrence. The proposed TXAI system with TT2FSs is exemplified with a step-by-step numerical example and an empirical study using a real-life intelligent-environments dataset to solve a time-dependent classification problem (predicting whether or not a room is occupied depending on the sensor readings at a particular time of day). The TXAI system's performance is also compared with other state-of-the-art classification methods with varying levels of explainability. On 10-fold test datasets, the TXAI system manifested better classification prowess, with a mean recall of 95.40%, than a standard XAI system based on non-temporal general type-2 (GT2) fuzzy sets, which had a mean recall of 87.04%. TXAI also performed significantly better than most non-explainable AI systems, with between 3.95% and 19.04% improvement in mean recall. A temporal convolutional network (TCN) was marginally better than TXAI (by 1.98% mean recall), although at a much higher computational complexity. In addition, TXAI can also outline the most likely time-dependent trajectories using the frequency-of-occurrence values embedded in the TXAI model; viz., given a rule at a determined time interval, what will be the next most likely rule at a subsequent time interval. In this regard, the proposed TXAI system can have profound implications for delineating the evolution of real-life time-dependent processes, such as behavioural or biological processes.
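    The toy Python sketch below illustrates only the central idea of weighting an ordinary fuzzy membership by the empirical frequency of occurrence of a measurement in a given time slot; the membership shape, sensor values and frequencies are invented for illustration and do not reproduce the paper's four-dimensional TT2FS formulation or rule base.

```python
# Toy sketch of a time-dependent fuzzy inference step: an ordinary fuzzy
# membership over a sensor reading is weighted by the empirical frequency of
# occurrence of that reading in each time-of-day slot. This illustrates the
# idea behind the paper's TT2FSs only; the actual 4D membership functions and
# rule base are defined in the paper, not reproduced here.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function on [a, c] with peak at b."""
    return np.clip(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0)

# Hourly frequency with which "high CO2" readings were observed historically
# (illustrative numbers only).
freq_of_occurrence = {8: 0.10, 12: 0.65, 18: 0.40, 23: 0.05}

def temporal_firing_strength(co2_ppm, hour):
    """Firing strength of the rule 'IF CO2 is High AT this hour THEN Occupied'."""
    mu_high = tri(co2_ppm, 600.0, 1000.0, 1400.0)       # "High CO2" fuzzy set
    return mu_high * freq_of_occurrence.get(hour, 0.0)  # temporal weighting

for hour in (8, 12, 23):
    print(hour, round(temporal_firing_strength(950.0, hour), 3))
```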

    Recognition of Patient Groups with Sleep Related Disorders using Bio-signal Processing and Deep Learning

    Accurately diagnosing sleep disorders is essential for clinical assessments and treatments. Polysomnography (PSG) has long been used for the detection of various sleep disorders. In this research, electrocardiography (ECG) and electromyography (EMG) have been used for the recognition of breathing- and movement-related sleep disorders. Bio-signal processing has been performed by extracting EMG features exploiting entropy and statistical moments, in addition to developing an iterative pulse peak detection algorithm using the synchrosqueezed wavelet transform (SSWT) for reliable extraction of heart rate and breathing-related features from ECG. A deep learning framework has been designed to incorporate the EMG and ECG features. The framework has been used to classify four groups: healthy subjects, patients with obstructive sleep apnea (OSA), patients with restless leg syndrome (RLS), and patients with both OSA and RLS. The proposed deep learning framework produced a mean accuracy of 72% and a weighted F1 score of 0.57 across subjects for our formulated four-class problem.
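    A hedged sketch of the feature-extraction side is given below: entropy and statistical moments for EMG, and heart-rate features from detected R peaks for ECG. scipy.signal.find_peaks is used only as a simple stand-in for the paper's SSWT-based iterative pulse peak detector, and the sampling rate, thresholds and toy signals are assumptions.

```python
# Sketch of the feature-extraction side: EMG entropy / statistical moments and
# ECG heart-rate features. scipy.signal.find_peaks is a simple stand-in for
# the paper's synchrosqueezed-wavelet iterative pulse-peak detector; sampling
# rate, window lengths and thresholds are assumptions.
import numpy as np
from scipy.signal import find_peaks
from scipy.stats import skew, kurtosis

FS_ECG = 250.0   # assumed ECG sampling rate (Hz)

def emg_features(emg):
    """Entropy plus first statistical moments of one EMG epoch."""
    hist, _ = np.histogram(emg, bins=32, density=True)
    hist = hist[hist > 0]
    entropy = -np.sum(hist * np.log(hist))            # Shannon entropy of amplitudes
    return np.array([entropy, emg.mean(), emg.std(), skew(emg), kurtosis(emg)])

def ecg_features(ecg):
    """Heart-rate statistics from detected R peaks."""
    peaks, _ = find_peaks(ecg, distance=int(0.4 * FS_ECG),
                          height=np.quantile(ecg, 0.90))
    rr = np.diff(peaks) / FS_ECG                       # RR intervals in seconds
    hr = 60.0 / rr
    return np.array([hr.mean(), hr.std(), rr.std()])   # mean HR, HR variability

# Toy 30 s epoch: an ECG-like spike train plus noise, and random EMG.
t = np.arange(0, 30, 1 / FS_ECG)
ecg = np.sin(2 * np.pi * 1.1 * t) ** 63 + 0.05 * np.random.randn(t.size)
emg = np.random.randn(t.size)
features = np.concatenate([emg_features(emg), ecg_features(ecg)])
print(features.shape)   # per-epoch feature vector fed to the deep classifier
```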

    Towards Understanding Human Functional Brain Development with Explainable Artificial Intelligence: Challenges and Perspectives

    The last decades have seen significant advancements in non-invasive neuroimaging technologies that have been increasingly adopted to examine human brain development. However, these improvements have not necessarily been followed by more sophisticated data analysis measures that are able to explain the mechanisms underlying functional brain development. For example, the shift from univariate (single area in the brain) to multivariate (multiple areas in the brain) analysis paradigms is significant, as it allows investigations into the interactions between different brain regions. However, despite the potential of multivariate analysis to shed light on the interactions between developing brain regions, the artificial intelligence (AI) techniques typically applied render the analysis non-explainable. The purpose of this paper is to understand the extent to which current state-of-the-art AI techniques can inform functional brain development. In addition, a review of which AI techniques are more likely to explain their learning in terms of the processes of brain development, as defined by developmental cognitive neuroscience (DCN) frameworks, is also undertaken. This work also proposes that eXplainable AI (XAI) may provide viable methods to investigate functional brain development as hypothesised by DCN frameworks.

    Explainable Artificial Intelligence Based Analysis for Developmental Cognitive Neuroscience

    In the last decades, non-invasive and portable neuroimaging techniques, such as functional near infrared spectroscopy (fNIRS), have allowed researchers to study the mechanisms underlying the functional cognitive development of the human brain, thus furthering the potential of Developmental Cognitive Neuroscience (DCN). However, the traditional paradigms used for the analysis of infant fNIRS data are still quite limited. Here, we introduce a multivariate pattern analysis for fNIRS data, xMVPA, that is powered by eXplainable Artificial Intelligence (XAI). The proposed approach is exemplified in a study that investigates visual and auditory processing in six-month-old infants. xMVPA not only identified patterns of cortical interactions that confirm the existing literature; expressed in the form of conceptual linguistic representations, it also provided evidence for brain networks engaged in the processing of visual and auditory stimuli that were previously overlooked by other methods, while demonstrating similar statistical performance.
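    As a generic illustration of multivariate pattern analysis with a transparent read-out (not the xMVPA itself, which produces fuzzy linguistic rules), the sketch below decodes a toy two-condition label from channel-wise fNIRS features with a shallow decision tree whose decision path can be printed as explicit channel rules; the data, channel names and model choice are assumptions.

```python
# Generic multivariate pattern analysis (MVPA) sketch: decode the stimulus
# condition (visual vs. auditory) from channel-wise fNIRS features with a
# shallow, inherently interpretable model. This is NOT the paper's xMVPA;
# a decision tree is used here only to show how a decoded pattern can be
# read back as explicit channel rules.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials, n_channels = 120, 12
channel_names = [f"ch{i:02d}_mean_HbO" for i in range(n_channels)]  # placeholder names

# Toy data: mean oxy-haemoglobin change per channel and trial, two conditions.
X = rng.normal(size=(n_trials, n_channels))
y = rng.integers(0, 2, size=n_trials)    # 0 = visual, 1 = auditory (toy labels)
X[y == 1, 3] += 1.0                      # make one channel informative

tree = DecisionTreeClassifier(max_depth=2, random_state=0)
print(cross_val_score(tree, X, y, cv=5).mean())        # decoding accuracy
tree.fit(X, y)
print(export_text(tree, feature_names=channel_names))  # human-readable rules
```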

    A Generic Deep Learning Based Cough Analysis System from Clinically Validated Samples for Point-of-Need Covid-19 Test and Severity Levels

    We seek to evaluate the detection performance of a rapid primary screening tool for Covid-19 based solely on the cough sound, using 8,380 clinically validated samples with laboratory molecular tests (2,339 Covid-19 positive and 6,041 Covid-19 negative). Samples were clinically labelled according to the test results and severity, based on quantitative RT-PCR (qRT-PCR) analysis, cycle threshold, and patients' lymphocyte counts. Our proposed generic method is an algorithm based on Empirical Mode Decomposition (EMD), with subsequent classification based on a tensor of audio features and a deep artificial neural network classifier with convolutional layers, called 'DeepCough'. Two different versions of DeepCough, based on the number of tensor dimensions, i.e., DeepCough2D and DeepCough3D, have been investigated. These methods have been deployed in a multi-platform proof-of-concept web app, CoughDetect, to administer this test anonymously. Covid-19 recognition achieved a promising AUC (Area Under the Curve) of 98.80 ± 0.83%, sensitivity of 96.43 ± 1.85%, and specificity of 96.20 ± 1.74%, and an AUC of 81.08 ± 5.05% for the recognition of three severity levels. Our proposed web tool and the underpinning algorithm for robust, fast, point-of-need identification of Covid-19 facilitate the rapid detection of the infection. We believe that it has the potential to significantly hamper the Covid-19 pandemic across the world.
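    The sketch below illustrates only the classification stage: an audio recording is turned into a feature map and passed through a small 2D convolutional network. A log-mel spectrogram and a generic PyTorch CNN stand in for the paper's EMD-derived feature tensor and the DeepCough architecture, so the feature settings and layer sizes are assumptions.

```python
# Hedged sketch of the classification stage: a small 2D CNN over an audio
# feature map. The paper's pipeline first applies Empirical Mode Decomposition
# and builds a richer feature tensor ("DeepCough"); here a log-mel spectrogram
# and a generic CNN stand in, so all settings are illustrative assumptions.
import numpy as np
import librosa
import torch
import torch.nn as nn

SR = 16000
cough = np.random.randn(SR * 2).astype(np.float32)    # 2 s toy waveform

mel = librosa.feature.melspectrogram(y=cough, sr=SR, n_mels=64)
logmel = librosa.power_to_db(mel)                      # (64, time) feature map
x = torch.tensor(logmel, dtype=torch.float32)[None, None]  # (batch, channel, 64, T)

class CoughCNN(nn.Module):
    def __init__(self, n_classes=2):                   # Covid-19 positive / negative
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                    # pool to a fixed-size vector
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = CoughCNN()
print(model(x).shape)                                  # logits: (1, 2)
```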

    Explainable artificial intelligence for functional brain development analysis: methods and applications

    In the last decades, non-invasive and portable neuroimaging techniques, such as functional Near-Infrared Spectroscopy (fNIRS), have allowed researchers to study the mechanisms underlying the functional development of the human brain, thus furthering the potential of Developmental Cognitive Neuroscience (DCN). However, the traditional methods used for the analysis of infant fNIRS data are still quite limited. Here, I introduce new Fuzzy Cognitive Maps, called EFCMs, for Effective Connectivity (EC) analysis of infants' fNIRS data. EFCMs can outline the interconnections between cortical areas as well as specify the direction of EC. In contrast, to shed light on the activation level of the cortical regions, I developed a Multivariate Pattern Analysis (MVPA) powered by eXplainable Artificial Intelligence (XAI), named eXplainable MVPA (xMVPA). The xMVPA is exemplified in a DCN study that investigates visual and auditory processing in six-month-old infants, with a classification accuracy of 67.69%. The xMVPA can identify patterns of cortical interactions formed in response to the presented stimuli, as hypothesised by the DCN frameworks. However, xMVPA can only analyse cross-sectional DCN studies, i.e., it is not able to analyse the temporal dynamics associated with a longitudinal DCN study. To this end, I developed a novel time-dependent XAI (TXAI) system based on Temporal Type-2 Fuzzy Sets (TT2FSs). The TXAI system is exemplified on an empirical study using a real-life intelligent-environments dataset to solve a time-dependent classification problem, attaining a classification accuracy of 94.08%. The proposed TXAI system has the potential to inform the evolution of a process (such as functional brain development) using temporal trajectories, which in turn may assist in the delineation of brain developmental trajectories.
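    To illustrate the kind of directed-influence model that Fuzzy Cognitive Maps provide for effective connectivity analysis, the sketch below iterates a plain FCM over three example cortical regions; the weight matrix, region names and squashing function are invented for illustration and are not the thesis's learned EFCMs.

```python
# Minimal sketch of a plain Fuzzy Cognitive Map update over cortical regions,
# showing the directed-influence structure that underpins EC analysis. The
# weight matrix, region names and squashing function are illustrative
# assumptions; the thesis's EFCMs are learned from infant fNIRS data.
import numpy as np

regions = ["occipital", "temporal", "frontal"]
# W[i, j] = signed influence of region j on region i (direction of EC)
W = np.array([[0.0, 0.2, 0.0],
              [0.6, 0.0, 0.1],
              [0.3, 0.5, 0.0]])

def step(state, W):
    """One FCM iteration: weighted influences squashed to (0, 1)."""
    return 1.0 / (1.0 + np.exp(-(W @ state)))

state = np.array([0.9, 0.1, 0.1])        # initial activation (e.g. visual stimulus)
for _ in range(10):
    state = step(state, W)
print(dict(zip(regions, state.round(3))))
```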

    A Type-2 Fuzzy Logic Based Explainable Artificial Intelligence System for Developmental Neuroscience

    Research in developmental cognitive neuroscience faces challenges associated not only with its population (infants and children, who might not be too willing to cooperate) but also with the limited choice of neuroimaging techniques that can non-invasively record brain activity. For example, magnetic resonance imaging (MRI) studies are unsuitable for developmental cognitive studies because they require participants to stay still for a long time in a noisy environment. In this regard, functional near-infrared spectroscopy (fNIRS) is fast emerging as a de-facto neuroimaging standard for recording the brain activity of young infants. However, the absence of an associated anatomical image and of a standard technical framework for fNIRS data analysis remains a significant impediment to gaining insights into the workings of developing brains. To this end, this work presents an Explainable Artificial Intelligence (XAI) system for infants' fNIRS data, using a multivariate pattern analysis (MVPA) driven by a genetic algorithm (GA) based type-2 Fuzzy Logic System (FLS) for the classification of infants' brain activity evoked by different stimuli. This work contributes towards laying the foundation for a transparent fNIRS data analysis that holds the potential to enable researchers to map the classification result to the corresponding brain activity pattern, which is of paramount significance in understanding how the developing human brain functions.
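    The sketch below shows the basic building block of such a classifier: an interval type-2 Gaussian membership function with an uncertain mean, and the interval firing strength of a single rule under a min t-norm. The footprint-of-uncertainty parameters and inputs are invented here; in the reported system they would be selected by the genetic algorithm.

```python
# Sketch of the building block behind a type-2 FLS classifier: an interval
# type-2 Gaussian membership function (uncertain mean) and the interval firing
# strength of one rule. Footprint-of-uncertainty parameters are made up; in
# the paper they are selected by a genetic algorithm.
import numpy as np

def it2_gaussian(x, m_low, m_high, sigma):
    """Lower and upper membership for a Gaussian with uncertain mean."""
    upper = np.exp(-0.5 * ((x - np.clip(x, m_low, m_high)) / sigma) ** 2)
    m_far = np.where(np.abs(x - m_low) > np.abs(x - m_high), m_low, m_high)
    lower = np.exp(-0.5 * ((x - m_far) / sigma) ** 2)
    return lower, upper

def rule_firing(channel_values, antecedents):
    """Min t-norm over the antecedents gives an interval firing strength."""
    lowers, uppers = zip(*(it2_gaussian(x, *p) for x, p in
                           zip(channel_values, antecedents)))
    return min(lowers), min(uppers)

# Two fNIRS-channel antecedents "activation is High" with uncertain means.
antecedents = [(0.4, 0.6, 0.2), (0.5, 0.7, 0.2)]
print(rule_firing([0.55, 0.65], antecedents))
```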

    Fuzzy Temporal Convolutional Neural Networks in P300-based Brain-Computer Interface for Smart Home Interaction

    The processing and classification of electroencephalographic (EEG) signals are increasingly performed using deep learning frameworks, such as convolutional neural networks (CNNs), which automatically generate abstract features from brain data and pave the way for remarkable classification prowess. However, EEG patterns exhibit high variability across time and uncertainty due to noise, a significant problem to be addressed in P300-based Brain-Computer Interfaces (BCIs) for smart home interaction, which operate in non-optimal natural environments where additive noise, often white, is present. In this work, we propose EEG-TCFNet, a sequential unification of temporal convolutional networks (TCNs) adapted to EEG signals, LSTM cells, and a fuzzy neural block (FNB); the fuzzy components may enable a higher tolerance to noisy conditions. We applied three different architectures to compare the effect of using the FNB to classify the P300 wave and to build a BCI for smart home interaction with healthy and post-stroke individuals. Our results report a maximum classification accuracy of 98.6% and 74.3% using the proposed EEG-TCFNet in a subject-dependent and a subject-independent strategy, respectively. Overall, the use of the FNB in all three CNN topologies outperformed the equivalents without the FNB. In addition, we compared the addition of the FNB to other state-of-the-art methods and obtained higher classification accuracies on account of the integration with the FNB. The remarkable performance of the proposed model, EEG-TCFNet, and the general integration of fuzzy units with other classifiers would pave the way for enhanced P300-based BCIs for smart home interaction within natural settings.
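    The PyTorch sketch below illustrates the general architecture described above: dilated causal temporal convolutions over multichannel EEG, an LSTM, and a simple fuzzy block of trainable Gaussian membership units ahead of the classifier. Layer sizes, the fuzzy-block parameterisation and the mapping onto the authors' EEG-TCFNet are assumptions for illustration only.

```python
# Illustrative sketch of the general architecture: dilated causal temporal
# convolutions over EEG, an LSTM, and a simple fuzzy block of trainable
# Gaussian membership units before the classifier. Layer sizes and the
# fuzzy-block parameterisation are assumptions; the authors' EEG-TCFNet
# details are in the paper.
import torch
import torch.nn as nn

class FuzzyBlock(nn.Module):
    """Maps features to memberships of trainable Gaussian fuzzy sets."""
    def __init__(self, in_features, n_sets):
        super().__init__()
        self.centres = nn.Parameter(torch.randn(n_sets, in_features))
        self.log_sigma = nn.Parameter(torch.zeros(n_sets, in_features))

    def forward(self, x):                        # x: (batch, in_features)
        d = (x[:, None, :] - self.centres) / self.log_sigma.exp()
        return torch.exp(-0.5 * d.pow(2)).mean(dim=2)   # (batch, n_sets)

class EEGTCFNetSketch(nn.Module):
    def __init__(self, n_channels=8, n_classes=2):
        super().__init__()
        self.tcn = nn.Sequential(                # causal via left padding
            nn.ConstantPad1d((2, 0), 0.0), nn.Conv1d(n_channels, 16, 3), nn.ReLU(),
            nn.ConstantPad1d((4, 0), 0.0), nn.Conv1d(16, 16, 3, dilation=2), nn.ReLU(),
        )
        self.lstm = nn.LSTM(16, 32, batch_first=True)
        self.fuzzy = FuzzyBlock(32, n_sets=10)
        self.head = nn.Linear(10, n_classes)     # P300 vs. non-P300

    def forward(self, x):                        # x: (batch, channels, time)
        h = self.tcn(x).transpose(1, 2)          # (batch, time, 16)
        _, (h_n, _) = self.lstm(h)               # last hidden state
        return self.head(self.fuzzy(h_n[-1]))

model = EEGTCFNetSketch()
eeg = torch.randn(4, 8, 256)                     # 4 epochs, 8 channels, 256 samples
print(model(eeg).shape)                          # logits: (4, 2)
```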